Multi-view monocular pose estimation for spacecraft relative navigation
This paper presents a method for estimating the pose of a non-cooperative target for spacecraft rendezvous applications using only a monocular camera and a three-dimensional model of the target. The model is used to build an offline database of pre-rendered keyframes with known poses. An online stage solves the model-to-image registration problem by matching two-dimensional point and edge features from the camera to the database. We apply our method to retrieve the motion of the now-inoperative satellite ENVISAT. The combination of both feature types is shown to produce a robust pose solution even for large displacements relative to the keyframes, without relying on real-time rendering, making it attractive for autonomous systems applications.
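As a rough illustration of the keyframe-retrieval step described above, the sketch below picks the database keyframe whose rendered descriptors best match the camera features by mean nearest-neighbour distance. All data and names are hypothetical, and the subsequent point/edge registration is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical offline database: each keyframe stores a pose id and the
# feature descriptors rendered from the 3D model at that pose.
database = [{"pose_id": i, "descriptors": rng.normal(size=(50, 32))}
            for i in range(8)]

def best_keyframe(query_desc, database):
    """Retrieve the keyframe whose rendered descriptors best explain the
    camera features (mean nearest-neighbour descriptor distance)."""
    scores = []
    for entry in database:
        d = np.linalg.norm(query_desc[:, None, :] -
                           entry["descriptors"][None, :, :], axis=-1)
        scores.append(d.min(axis=1).mean())
    return database[int(np.argmin(scores))]["pose_id"]

# Features observed near the pose of keyframe 3 should retrieve keyframe 3.
query = database[3]["descriptors"][:20] + rng.normal(scale=0.01, size=(20, 32))
```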
Histogram of distances for local surface description
3D object recognition has proven superior to its 2D counterpart in numerous implementations, making it an active research topic. Local descriptor-based approaches in particular, although quite accurate, are limited by the stability of the local reference frame or axis (LRF/A) on which the descriptors are defined. Moreover, extra processing time is required to estimate the LRF for each local patch. We propose a 3D descriptor that removes the need for an LRF/A, dramatically reducing the processing time needed, while also achieving robustness to high levels of noise and non-uniform subsampling. Our approach, named Histogram of Distances, is based on multiple L2-norm metrics of local patches, providing a simple and fast-to-compute descriptor suitable for time-critical applications. Evaluation on popular point clouds of both high and low quality showed promising performance.
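A minimal sketch of an LRF-free descriptor in this spirit: a normalised histogram of pairwise L2 distances within a local patch. Pairwise distances need no reference frame, so no LRF/A estimation is required. This illustrates the general idea only, not the authors' exact formulation.

```python
import numpy as np

def histogram_of_distances(patch, bins=8):
    """Normalised histogram of pairwise L2 distances within a local patch.
    Distances are invariant to rigid motion, so the descriptor needs no
    local reference frame."""
    d = np.linalg.norm(patch[:, None, :] - patch[None, :, :], axis=-1)
    iu = np.triu_indices(len(patch), k=1)        # count each pair once
    hist, _ = np.histogram(d[iu], bins=bins, range=(0.0, d[iu].max()))
    return hist / hist.sum()

rng = np.random.default_rng(1)
patch = rng.normal(size=(30, 3))                 # toy local surface patch
desc = histogram_of_distances(patch)             # length `bins`, sums to 1
```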
Unscented Kalman Filter for Vision Based Target Localisation with a Quadrotor
Unmanned aerial vehicles (UAVs) equipped with a navigation system and an embedded camera can be used to estimate the position of a desired target. The relative position of the UAV, together with knowledge of the camera orientation and imagery data, can be used to produce bearing measurements that allow estimation of the target position. The filtering methods applied are prone to biases due to noisy measurements, and further noise may be introduced depending on the UAV trajectory used for target localisation. This work presents the implementation of an Unscented Kalman Filter (UKF) to estimate the position of a target in 3D Cartesian space within a small indoor scenario. A small UAV with a single-board computer, equipped with a frontal camera and moving in an oval trajectory at a fixed height, was employed. Such a trajectory enabled an experimental comparison of UAV simulation data with real-time flight data under indoor conditions. An OptiTrack motion capture system and the Robot Operating System (ROS) were used to retrieve the drone position and exchange information at high rates.
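The core of a UKF measurement update is propagating the state estimate through the nonlinear bearing model via sigma points. The sketch below shows that one step with a toy azimuth/elevation bearing model; all positions and parameters are illustrative, not the paper's setup.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard unscented-transform sigma points and weights."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + L[:, i] for i in range(n)] \
                 + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def bearing(target, uav):
    """Azimuth/elevation of the target as seen from the UAV position."""
    d = target - uav
    return np.array([np.arctan2(d[1], d[0]),
                     np.arctan2(d[2], np.hypot(d[0], d[1]))])

mean = np.array([2.0, 1.0, 0.5])    # hypothetical target estimate [m]
cov = np.diag([0.04, 0.04, 0.01])   # its covariance
uav = np.array([0.0, 0.0, 1.0])     # hypothetical UAV position [m]
pts, w = sigma_points(mean, cov)
# Predicted bearing measurement: weighted mean over transformed sigma points.
z_pred = sum(wi * bearing(p, uav) for wi, p in zip(w, pts))
```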
Autonomous navigation for mobility scooters: a complete framework based on open-source software
In recent years, there has been a growing demand for small vehicles targeted at users with mobility restrictions and designed to operate in pedestrian areas. The users of these vehicles are generally required to be in control for the entire duration of their journey, but many more people could benefit from them if some of the driving tasks could be automated. In this scenario, we set out to develop an autonomous mobility scooter, with the aim of understanding the commercial feasibility of such a product.
This paper reports on the progress of this project, proposing a framework for autonomous navigation in pedestrian areas and focusing in particular on the construction of suitable costmaps. The proposed framework is based on open-source software, including a library created by the authors for the generation of costmaps.
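To illustrate the costmap idea, the sketch below turns a boolean obstacle grid into a costmap where obstacle cells are lethal and nearby cells get a cost that decays with distance. The lethal/decay values follow a common costmap convention but are illustrative, not taken from the authors' library.

```python
import numpy as np

LETHAL, FREE = 254, 0  # common costmap convention (values illustrative)

def inflate(grid, radius=2.0, decay=60.0):
    """Turn a boolean obstacle grid into an integer costmap: obstacle
    cells are lethal, cells within `radius` get a distance-decayed cost."""
    cost = np.where(grid, LETHAL, FREE)
    ys, xs = np.indices(grid.shape)
    for oy, ox in np.argwhere(grid):
        d = np.hypot(ys - oy, xs - ox)
        infl = np.where((d > 0) & (d <= radius),
                        np.maximum(LETHAL - decay * d, 1), 0).astype(int)
        cost = np.maximum(cost, infl)
    return cost

grid = np.zeros((5, 5), dtype=bool)
grid[2, 2] = True      # single obstacle in the centre
cm = inflate(grid)     # centre lethal, neighbours decay, far corners free
```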
Real-time multiview data fusion for object tracking with RGBD sensors
This paper presents a new approach to accurately track a moving vehicle with a multiview setup of red-green-blue depth (RGBD) cameras. We first propose a correction method to eliminate a shift that occurs in depth sensors as they wear, an issue that cannot be corrected by the ordinary calibration procedure. Next, we present a sensor-wise filtering system to correct for unknown vehicle motion. A data fusion algorithm is then used to optimally merge the sensor-wise estimated trajectories. We implement most parts of our solution on the graphics processor, so the whole system is able to operate at up to 25 frames per second with a configuration of five cameras. Test results show the accuracy achieved and the robustness of our solution to uncertainties in the measurements and the modelling.
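A standard way to merge per-sensor trajectory estimates, which can stand in for the fusion step described above, is inverse-variance weighting: sensors with lower variance contribute more, and the fused variance is smaller than any single sensor's. This is a generic textbook rule, not necessarily the paper's exact algorithm.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of per-sensor position estimates.
    Returns the fused estimate and its (reduced) variance."""
    w = 1.0 / np.asarray(variances)
    fused = (w[:, None] * np.asarray(estimates)).sum(axis=0) / w.sum()
    return fused, 1.0 / w.sum()

# Two hypothetical RGBD cameras reporting the same vehicle position [m]:
p, v = fuse([[1.00, 2.00, 0.00], [1.10, 2.02, 0.00]], [0.01, 0.04])
```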
Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous
Research on deep learning techniques for autonomous spacecraft relative navigation has grown continuously in recent years. Adopting these techniques offers enhanced performance, but it also raises concerns about the trustworthiness and security of such deep learning methods, given their susceptibility to adversarial attacks. In this work, we propose a novel approach to adversarial attack detection for deep neural network-based relative pose estimation schemes, based on the concept of explainability. For an orbital rendezvous scenario, we develop an innovative relative pose estimation technique built on our proposed Convolutional Neural Network (CNN), which takes an image from the chaser's onboard camera and accurately outputs the target's relative position and rotation. We perturb the input images using adversarial attacks generated by the Fast Gradient Sign Method (FGSM). The adversarial attack detector is then built on a Long Short-Term Memory (LSTM) network, which takes the explainability measure, namely the SHapley values, from the CNN-based pose estimator and flags adversarial attacks when they occur. Simulation results show that the proposed adversarial attack detector achieves a detection accuracy of 99.21%. Both the deep relative pose estimator and the adversarial attack detector were then tested on real data captured from our laboratory-designed setup, where the proposed detector achieves an average detection accuracy of 96.29%.
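The FGSM perturbation mentioned above has a one-line form: step the input along the sign of the loss gradient. The sketch below applies it to a toy linear regressor standing in for the CNN pose estimator, where the input gradient of the squared-error loss has a closed form; all values are illustrative.

```python
import numpy as np

def fgsm(x, grad, eps=0.05):
    """Fast Gradient Sign Method: step the input along the sign of the
    loss gradient, clipped to the valid input range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy differentiable regressor standing in for the CNN pose estimator:
# prediction W @ x, loss ||W @ x - y_true||^2, so the input gradient is
# 2 W^T (W x - y_true) in closed form. All values are illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 6))     # toy "network" weights
x = rng.uniform(size=6)         # clean input in [0, 1]
y_true = rng.normal(size=2)     # ground-truth "pose"
grad = 2.0 * W.T @ (W @ x - y_true)
x_adv = fgsm(x, grad)
err_clean = np.linalg.norm(W @ x - y_true)
err_adv = np.linalg.norm(W @ x_adv - y_true)   # the attack raises the error
```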
Visualisation of 3D point clouds acquired by a laser: impact of stereoscopic vision and head tracking on a target search task
In a military context, we study the visualisation of 3D point clouds sensed by a drone equipped with a 3D laser flying around a battlefield area. We propose a user experiment protocol to study the effect of stereoscopic and head-tracked rendering on a target search task with a virtual reality headset. Our first results show a clear advantage when stereoscopic rendering and head tracking are activated.
Evaluating 3D local descriptors for future LIDAR missiles with automatic target recognition capabilities
Future light detection and ranging (LIDAR) seeker missiles incorporating 3D automatic target recognition (ATR) capabilities can improve a missile's effectiveness in complex battlefield environments. Considering the progress of local 3D descriptors in the computer vision domain, this paper evaluates a number of them on highly credible simulated air-to-ground missile engagement scenarios. These scenarios take into account numerous parameters that have not yet been investigated in the literature, including variable missile-to-target range, 6-degrees-of-freedom missile motion and atmospheric disturbances. Additionally, the evaluation process uses our suggested 3D ATR architecture, which, compared to current pipelines, involves more post-processing layers aimed at further enhancing 3D ATR performance. Our trials reveal that computer vision algorithms are appealing for missile-oriented 3D ATR.
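A typical matching stage in descriptor-based 3D ATR pipelines is nearest-neighbour matching with a ratio test: a scene descriptor is matched to the model only when its best match is clearly better than the second best. The sketch below shows this with hypothetical descriptor data; it is a generic component, not the paper's specific architecture.

```python
import numpy as np

def match_ratio(query, model, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test: accept a
    match only when the best distance is clearly below the second best."""
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(model - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:          # reject ambiguous matches
            matches.append((qi, int(i1)))
    return matches

rng = np.random.default_rng(2)
model = rng.normal(size=(40, 16))   # hypothetical target-model descriptors
# Noisy scene observations of model descriptors 5, 9 and 17:
query = model[[5, 9, 17]] + rng.normal(scale=0.01, size=(3, 16))
```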